Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@wraithe@mastodon.social
2024-05-10 06:29:21

Great thread over on Bluesky by one of the main devs for the app, commenting on #JackDorsey's comments about why he left Bluesky (company and network)
Remember folks, it ain’t Mastodon v BlueSky (IMO)

@arXiv_mathRT_bot@mastoxiv.page
2024-04-12 08:38:01

This arxiv.org/abs/2011.10019 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csHC_bot@mastoxiv.page
2024-05-10 07:35:24

New Harms Moderated by Immersive Experiences and Breaks in Them
Eugene Kukshinov
arxiv.org/abs/2405.05926 arxiv.org/p…

@arXiv_csGT_bot@mastoxiv.page
2024-03-11 08:31:43

This arxiv.org/abs/1807.05477 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_statML_bot@mastoxiv.page
2024-03-11 07:29:56

An Adaptive Dimension Reduction Estimation Method for High-dimensional Bayesian Optimization
Shouri Hu, Jiawei Li, Zhibo Cai
arxiv.org/abs/2403.05425

@arXiv_mathAG_bot@mastoxiv.page
2024-04-09 06:54:36

Modular curves $X_1(n)$ as moduli spaces of point arrangements and applications
Lev Borisov, Xavier Roulleau
arxiv.org/abs/2404.04364

@memeorandum@universeodon.com
2024-03-29 22:30:59

Remarks by President Biden, President Obama, and President Clinton in a Moderated Conversation with Stephen Colbert at a Campaign Reception | New York, NY (The White House)
whitehouse.gov/briefing-room/s
memeorandum.com/240329/p60#a24

@arXiv_mathRT_bot@mastoxiv.page
2024-05-10 08:37:37

This arxiv.org/abs/2108.05733 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csHC_bot@mastoxiv.page
2024-05-01 07:17:21

Dynamic Human Trust Modeling of Autonomous Agents With Varying Capability and Strategy
Jason Dekarske (University of California, Davis), Zhaodan Kong (University of California, Davis), Sanjay Joshi (University of California, Davis)
arxiv.org/abs/2404.19291 arxiv.org/pdf/2404.19291
arXiv:2404.19291v1 Announce Type: new
Abstract: Objective: We model the dynamic trust of human subjects in a human-autonomy-teaming, screen-based task.
Background: Trust is an emerging area of study in human-robot collaboration. Many studies have treated robot performance as the sole predictor of human trust, but this may underestimate the complexity of the interaction.
Method: Subjects were paired with autonomous agents to search an on-screen grid and determine the number of outlier objects. In each trial, a different autonomous agent with a preassigned capability used one of three search strategies and then reported the number of outliers it found as a fraction of its capability. The subject then reported their total outlier estimate and evaluated statements about the agent's behavior, reliability, and their trust in the agent.
Results: 80 subjects were recruited. Self-reported trust was modeled using ordinary least squares, but the group that interacted with varying-capability agents on a short timescale was better fit by an ARIMAX model. Cross-validation between groups showed a moderate improvement in next-trial trust prediction.
Conclusion: A time-series modeling approach reveals the effects of the temporal ordering of agent performance on estimated trust. Recency bias may affect how subjects weigh the contribution of strategy or capability to trust. Understanding the connections between agent behavior, agent performance, and human trust is crucial to improving human-robot collaborative tasks.
Application: The modeling approach in this study demonstrates the need to represent autonomous agent characteristics over time to capture changes in human trust.